Dynamic Context


Off-dynamics Conditional Diffusion Planners

Ng, Wen Zheng Terence, Chen, Jianda, Zhang, Tianwei

arXiv.org Artificial Intelligence

Offline Reinforcement Learning (RL) offers an attractive alternative to interactive data acquisition by leveraging pre-existing datasets. However, its effectiveness hinges on the quantity and quality of the data samples. This work explores the use of more readily available, albeit off-dynamics, datasets to address the challenge of data scarcity in Offline RL. We propose a novel approach using conditional Diffusion Probabilistic Models (DPMs) to learn the joint distribution of the large-scale off-dynamics dataset and the limited target dataset. To enable the model to capture the underlying dynamics structure, we introduce two contexts for the conditional model: (1) a continuous dynamics score allows for partial overlap between trajectories from both datasets, providing the model with richer information; (2) an inverse-dynamics context guides the model to generate trajectories that adhere to the target environment's dynamics constraints. Empirical results demonstrate that our method significantly outperforms several strong baselines. Ablation studies further reveal the critical role of each dynamics context. Additionally, our model demonstrates that by modifying the context, we can interpolate between source and target dynamics, making it more robust to subtle shifts in the environment.
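The two conditioning contexts described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy target model, the exponential form of the dynamics score, and all names are assumptions made for the example.

```python
import numpy as np

def dynamics_score(src_next, tgt_pred_next):
    """Hypothetical continuous dynamics score: similarity between the next
    state observed in the source (off-dynamics) dataset and the next state
    the target dynamics model predicts. 1.0 means the dynamics agree."""
    err = np.linalg.norm(src_next - tgt_pred_next)
    return float(np.exp(-err))

def build_context(traj, target_model):
    """Assemble the two conditioning contexts for a trajectory of
    (state, action, next_state) transitions: a per-transition dynamics
    score and an inverse-dynamics context (here, the action the target
    inverse-dynamics model would infer for each transition)."""
    scores, inv_actions = [], []
    for (s, a, s_next) in traj:
        pred_next = target_model["forward"](s, a)
        scores.append(dynamics_score(s_next, pred_next))
        inv_actions.append(target_model["inverse"](s, s_next))
    return np.array(scores), np.array(inv_actions)
```

Because the score is continuous rather than a binary source/target label, trajectories that only partially match the target dynamics still contribute graded information, which is what permits interpolating between the two dynamics at generation time.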


CASPFormer: Trajectory Prediction from BEV Images with Deformable Attention

Yadav, Harsh, Schaefer, Maximilian, Zhao, Kun, Meisen, Tobias

arXiv.org Artificial Intelligence

Motion prediction is an important aspect of Autonomous Driving (AD) and Advanced Driver Assistance Systems (ADAS). Current state-of-the-art motion prediction methods rely on High Definition (HD) maps for capturing the surrounding context of the ego vehicle. Such systems lack scalability in real-world deployment, as HD maps are expensive to produce and update in real time. To overcome this issue, we propose the Context Aware Scene Prediction Transformer (CASPFormer), which can perform multi-modal motion prediction from rasterized Bird's-Eye-View (BEV) images. Our system can be integrated with any upstream perception module that is capable of generating BEV images. Moreover, CASPFormer directly decodes vectorized trajectories without any postprocessing. Trajectories are decoded recurrently using deformable attention, as it is computationally efficient and provides the network with the ability to focus its attention on the important spatial locations of the BEV images. In addition, we also address the issue of mode collapse in generating multiple scene-consistent trajectories by incorporating learnable mode queries. We evaluate our model on the nuScenes dataset and show that it reaches state-of-the-art performance across multiple metrics.
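The deformable-attention step this abstract relies on can be sketched as a single-head, NumPy-only toy: features are bilinearly sampled from the BEV map at a few learned offsets around a reference point and combined with learned attention weights. Function names, shapes, and the single-head simplification are all illustrative assumptions, not CASPFormer's actual architecture.

```python
import numpy as np

def bilinear_sample(feat, x, y):
    """Bilinearly interpolate a (H, W, C) feature map at pixel (x, y)."""
    H, W, _ = feat.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    x0, y0 = max(x0, 0), max(y0, 0)
    wx, wy = x - x0, y - y0
    top = (1 - wx) * feat[y0, x0] + wx * feat[y0, x1]
    bot = (1 - wx) * feat[y1, x0] + wx * feat[y1, x1]
    return (1 - wy) * top + wy * bot

def deformable_attend(feat, ref_point, offsets, weights):
    """Single-head deformable attention: instead of attending over every
    BEV cell, sample only a few offset locations around the reference
    point and average them with learned attention weights."""
    out = np.zeros(feat.shape[-1])
    for (dx, dy), w in zip(offsets, weights):
        out += w * bilinear_sample(feat, ref_point[0] + dx, ref_point[1] + dy)
    return out
```

Sampling a handful of offsets instead of the full H x W grid is what makes this attention variant cheap enough for recurrent trajectory decoding on large BEV rasters.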


Dynamic Contexts for Generating Suggestion Questions in RAG Based Conversational Systems

Tayal, Anuja, Tyagi, Aman

arXiv.org Artificial Intelligence

When interacting with Retrieval-Augmented Generation (RAG)-based conversational agents, users must carefully craft their queries to be understood correctly. Yet understanding the system's capabilities can be challenging for users, leading to ambiguous questions that necessitate further clarification. This work aims to bridge this gap by developing a suggestion question generator. To generate suggestion questions, our approach utilizes dynamic context, which includes both dynamic few-shot examples and dynamically retrieved contexts. Through experiments, we show that the dynamic-context approach generates better suggestion questions than other prompting approaches.
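The "dynamic context" idea above, selecting few-shot examples per query rather than fixing them, can be sketched with a simple cosine-similarity retriever and a prompt builder. The embedding vectors, prompt template, and function names are assumptions for illustration; the paper's actual retrieval and prompting pipeline may differ.

```python
import numpy as np

def top_k(query_vec, example_vecs, k=2):
    """Return indices of the k few-shot examples most similar to the
    query (cosine similarity), i.e. the 'dynamic few-shot' selection."""
    sims = [
        float(np.dot(query_vec, v) /
              (np.linalg.norm(query_vec) * np.linalg.norm(v) + 1e-9))
        for v in example_vecs
    ]
    return sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:k]

def build_prompt(query, examples, retrieved_passages):
    """Assemble dynamically retrieved passages and dynamically chosen
    few-shot (question, suggestion) pairs into one prompt."""
    shots = "\n".join(f"Q: {q}\nSuggested: {s}" for q, s in examples)
    ctx = "\n".join(retrieved_passages)
    return (f"Context:\n{ctx}\n\nExamples:\n{shots}\n\n"
            f"User query: {query}\nSuggest follow-up questions:")
```

Because both the examples and the passages change with each user query, the generator sees suggestions grounded in what the system can actually answer, which is the gap the paper targets.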


InSitu: An Approach for Dynamic Context Labeling Based on Product Usage and Sound Analysis

Lyardet, Fernando (Technical University of Darmstadt) | Hadjakos, Aristotelis (Technical University of Darmstadt) | Szeto, Diego Wong (Technical University of Darmstadt)

AAAI Conferences

Smart environments offer a vision of unobtrusive interaction with our surroundings, interpreting and anticipating our needs. One key aspect of making environments smart is the ability to recognize the current context. However, like any human space, smart environments are subject to changes and mutations of their purposes and their composition as people shape their living places according to their needs. In this paper we present an approach for recognizing context situations in smart environments that addresses this challenge. We propose a formalism for describing and sharing context states (or situations) and an architecture for gradually introducing contextual knowledge to an environment, where the current context is determined by sensing people's usage of devices and by sound analysis.
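A toy version of the situation-matching idea can be sketched as set containment: a shared situation description fires when all of its required device-usage and sound events are currently observed. The dictionary structure below is illustrative only and is not the paper's formalism.

```python
def match_situation(situations, observed_devices, observed_sounds):
    """Return the names of context situations whose required device
    usage and sound events are all present in the current observations."""
    matches = []
    for name, spec in situations.items():
        if (spec["devices"] <= observed_devices
                and spec["sounds"] <= observed_sounds):
            matches.append(name)
    return matches
```

Expressing situations as shareable declarative specs, rather than hard-coded rules, is what lets contextual knowledge be introduced gradually as the environment and its uses change.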